Conversation

hankfreund

  • Other comments: No material changes have been made to either KEP. Content was removed from one KEP or the other, and the grammar was updated so each still reads correctly.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Sep 30, 2025
@k8s-ci-robot
Contributor

Welcome @hankfreund!

It looks like this is your first PR to kubernetes/enhancements 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/enhancements has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory label Sep 30, 2025
@k8s-ci-robot
Contributor

Hi @hankfreund. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Sep 30, 2025
@k8s-ci-robot k8s-ci-robot added sig/node Categorizes an issue or PR as relevant to SIG Node. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Sep 30, 2025
@tallclair
Member

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Oct 1, 2025
Member

@tallclair tallclair left a comment

Need to add a prod readiness file: keps/prod-readiness/sig-node/5593.yaml


#### Beta

- Gather feedback from developers and surveys
Member

Do we have any feedback? I'm not sure we want to block the beta on this.

Author

Removed.

will rollout across nodes.
-->

<<[UNRESOLVED beta]>> Fill out when targeting beta to a release. <<[/UNRESOLVED]>>
Member

If we're targeting beta this release, this needs to be filled out. Or were you planning to cover this in a follow-up PR?

The risk is that a configured crashloop backoff causes the kubelet to become unstable. If that happens, a rollback just requires updating the config and restarting the kubelet.
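
For concreteness, here's a minimal sketch of the config an operator would revert in that scenario. The field and feature gate names (`crashLoopBackOff.maxContainerRestartPeriod`, `KubeletCrashLoopBackOffMax`) are my recollection of the 1.32 alpha and should be verified against the KEP:

```yaml
# Hypothetical KubeletConfiguration drop-in enabling a shorter max backoff.
# Field and feature gate names are assumptions; check the KEP/kubelet docs.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  KubeletCrashLoopBackOffMax: true
crashLoopBackOff:
  maxContainerRestartPeriod: "30s"
```

Rollback is then just removing (or raising) `maxContainerRestartPeriod`, or disabling the gate, and restarting the kubelet; no control plane coordination is needed.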

Author

I wasn't sure initially if I should do it all in one, but it makes sense. Updated this and all the following sections.

@@ -0,0 +1,3 @@
kep-number: 5593
alpha:
approver: TBD
Member

This was already approved for alpha. You're just splitting the previous enhancement into 2 parts. I think you can put @soltysh here (copied from https://github.com/kubernetes/enhancements/blob/511963e97f955f97e9842ae3015b60af956539b3/keps/prod-readiness/sig-node/4603.yaml)
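
Assuming that's the route taken, the completed file would presumably just mirror 4603's:

```yaml
# keps/prod-readiness/sig-node/5593.yaml, mirroring 4603.yaml as suggested above
kep-number: 5593
alpha:
  approver: "@soltysh"
```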

* `kubelet_pod_start_sli_duration_seconds`


###### Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested?
Member

No, but I think you can just put N/A here. This feature is stateless.

Contributor

I can agree with the stateless fact, but I need those feature on/off tests linked in the previous sections. Then update this section to mention that, because it's stateless, it's sufficient to verify that turning the feature gate on and off works as expected.

Contributor

@lauralorenz lauralorenz left a comment

A few comments focused on clean separation between the two KEPs

(Success) and the pod is transitioned into a "Completed" state or the expected
length of the pod run is less than 10 minutes.

This KEP proposes the following changes:
Contributor

nit

Suggested change
This KEP proposes the following changes:
This KEP proposes the following change:

Author

Done.


Some observations and analysis were made to quantify these risks going into
alpha. In the [Kubelet Overhead Analysis](#kubelet-overhead-analysis), the code
paths all restarting pods go through result in 5 obvious `/pods` API server
Contributor

I can't comment directly because it's out of range of the diff, but lines 499-508 still contain references to the per-Node feature.

Author

I went through and I think I got all of them.

included in the `config.validation_test` package.
### Rollout, Upgrade and Rollback Planning
Contributor

On and after line 1149 (ref) in the Scalability section, the per-Node feature is referenced; I suggest linking out to the other KEP there for inline context.

Author

Done.

This KEP proposes the following changes:
* Provide a knob to cluster operators to configure maximum backoff down, to
minimum 1s, at the node level
* Formally split image pull backoff and container restart backoff behavior
Contributor

Should this bullet point still be in the other KEP as well, since it was also done alongside that work (or alternatively, should this bullet point be taken out of the top-level content for both)? I think it was important to include references to these refactorings above the fold during the alpha phase so it was clear what was happening, but it's less important now that the alpha is implemented.

Author

I think removing it is all right.

rate limiting made up the gap to the stability of the system. Therefore, to
simplify both the implementation and the API surface, this 1.32 proposal puts
forth that the opt-in will be configured per node via kubelet configuration.

Contributor

Now that this is the only feature referred to in this KEP, I feel like this section would read better with a subheading here, like ### Implementing with KubeletConfiguration or something. Before, it was all smooshed together since there were already so many H3s, lol, but that's not the case anymore.

Author

Done.


All behavior changes are local to the kubelet component and its start up
configuration, so a mix of different (or unset) max backoff durations will not
cause issues.
Contributor

Just noticed that this sentence is kinda vague.

Suggested change
cause issues.
cause issues to running workloads.

Author

Done.

* Formally split backoff counter reset threshold for container restart backoff
behavior and maintain the current 10 minute recovery threshold
* Provide an alpha-gated change to get feedback and periodic scalability tests
on changes to the global initial backoff to 1s and maximum backoff to 1 minute
Contributor

Consider adding a sentence to both Overviews about how this was originally a bigger KEP that has been split into two, and link to the other one there, so it's quickly in context for new or returning readers.

Author

Done.

* Formally split image pull backoff and container restart backoff behavior
* Formally split backoff counter reset threshold for container restart backoff
behavior and maintain the current 10 minute recovery threshold

Contributor

X-post from the other one: consider adding a section to both Overviews about how this was originally a bigger KEP that has been split into two, and link to the other one there, so it's quickly in context for new or returning readers.

Author

Done.

@hankfreund hankfreund force-pushed the clb_split_kep branch 3 times, most recently from 1d0be4d to 684ee23 Compare October 6, 2025 22:11
reviewers:
- "@tallclair"
approvers:
- TBD
Member

@mrunalp can you take this?

Contributor

sure

Member

@hankfreund please add @mrunalp here

Author

Done!

question.
-->

<<[UNRESOLVED beta]>> Fill out when targeting beta to a release. <<[/UNRESOLVED]>>
Member

Needs to be filled out.

Member

same as below

Author

Got it. I think all the required sections are filled out now.

Member

@SergeyKanzhelev SergeyKanzhelev left a comment

From the SIG Node perspective, this is just a copy of what we already got to alpha. Pretty straightforward.

A couple of PRR questions are unanswered.

@SergeyKanzhelev
Member

/assign @mrunalp

@SergeyKanzhelev
Member

/lgtm

(except approver needs to be listed)

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Oct 14, 2025
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: hankfreund, mrunalp
Once this PR has been reviewed and has the lgtm label, please assign jpbetz for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Tuning and benchmarking a new crashloopbackoff decay will take a lot of
work. In the meantime, everyone can benefit from a per-node configurable
max crashloopbackoff delay. Splitting the KEP into two KEPs to allow for
graduating the latter to beta before the former.
This KEP is mostly a copy of keps/sig-node/4603-tune-crashloopbackoff
with all the tuning bits removed (and grammar adjusted to make sense).
The desire is to advance this KEP to beta sooner than we'd be able to
advance the other one.
@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Oct 14, 2025
@SergeyKanzhelev
Member

/assign @soltysh

/lgtm

@soltysh to clarify - this is a fork of a KEP with two feature gates into two KEPs - one per FG. The one that is moving to beta is only about a new config parameter that controls the CrashLoopBackoffPeriod.

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Oct 14, 2025
Contributor

@soltysh soltysh left a comment

From a PRR pov mostly missing test links.

# The following PRR answers are required at alpha release
# List the feature gate name and the components for which it must be enabled
feature-gates:
- name: ReduceDefaultCrashLoopBackoffDecay
Contributor

Nit: but above there's a see-also section you could update to point to the other KEP:

 see-also:
   - "/keps/sig-node/5593-configure-the-max-crashloopbackoff-delay"

title: Configure the max CrashLoopBackOff delay
kep-number: 5593
authors:
- "@lauralorenz"
Contributor

Nit: I'm assuming a lot is copied from the other KEP, but I'd still add hankfreund here.

- [ ] (R) Production readiness review approved
- [ ] "Implementation History" section is up-to-date for milestone
- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes
Contributor

Nit: please make sure to update this checklist and ✔️ the appropriate ones.

[testgrid](https://testgrid.k8s.io/sig-testing-canaries#pull-kubernetes-integration-go-canary),
[latest
prow](https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-integration-go-canary/1710565150676750336)
* test with and without feature flags enabled
Contributor

Can you update these links so they point to the exact tests verifying the feature gate on and off? Are there any additional integration tests for this feature? If yes, please provide the necessary links here.

We expect no non-infra related flakes in the last month as a GA graduation criteria.
-->

- Crashlooping container that restarts some number of times (ex 10 times),
Contributor

In the graduation criteria below you mention e2e tests for alpha; can you update this section with links to the specific tests?
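
For reference, the scenario described above ("crashlooping container that restarts some number of times") boils down to a workload like the following; this is only an illustrative manifest, not a pointer to the actual test fixture:

```yaml
# Illustrative crashlooping pod for the e2e scenario described above.
apiVersion: v1
kind: Pod
metadata:
  name: crashloop-demo
spec:
  restartPolicy: Always
  containers:
  - name: crasher
    image: busybox
    command: ["sh", "-c", "exit 1"]   # exits immediately, triggering backoff restarts
```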


No coordination needs to be done between the control plane and the nodes; all
behavior changes are local to the kubelet component and its start up
configuration. An n-3 kube-proxy, n-1kube-controller-manager, or n-1
Contributor

Suggested change
configuration. An n-3 kube-proxy, n-1kube-controller-manager, or n-1
configuration. An n-3 kube-proxy, n-1 kube-controller-manager, or n-1

and discussions with other contributors indicate that while little in core
kubernetes does strict parsing, it's not well tested. At minimum as part of this
implementation a test covering this for `KubeletConfiguration` objects will be
included in the `config.validation_test` package.
Contributor

Since this seems copied from the other KEP, were those tests added? If so, can you link them here?

implementation difficulties, etc.).
-->

N/A
Contributor

"No" would be a better answer here.

-->

Maybe! As containers could be restarting more, this may affect "Startup latency
of schedulable stateless pods", "Startup latency of schedule stateful pods".
Contributor

Have you performed any measurements of how significant that degradation can be? Similar to how you provided rough estimates for the increased CPU usage in the next question.
